Electricity and Electronics::
reconstruction loss
It's really just like training a regular autoencoder, except you add noise to the inputs, and the reconstruction loss is calculated based on the original inputs:
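The sentence above describes the denoising-autoencoder setup: corrupt the inputs, but score the reconstructions against the clean originals. A minimal NumPy sketch of that loss pattern (the tiny tied-weight linear autoencoder and all names here are illustrative, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy(x, stddev=0.1):
    """Corrupt the inputs with Gaussian noise (denoising setup)."""
    return x + rng.normal(0.0, stddev, size=x.shape)

def reconstruction_loss(originals, reconstructions):
    """MSE measured against the *clean* originals, not the noisy inputs."""
    return np.mean((originals - reconstructions) ** 2)

# Toy linear autoencoder: encode 4-D inputs down to 2-D and back.
W_enc = rng.normal(size=(4, 2)) * 0.5
W_dec = W_enc.T                       # tied weights, a common simplification

x = rng.normal(size=(8, 4))           # a batch of clean inputs
x_noisy = noisy(x)                    # the network only ever sees these
recon = x_noisy @ W_enc @ W_dec       # forward pass
loss = reconstruction_loss(x, recon)  # loss still compares against clean x
```

The key line is the last one: the targets are `x`, not `x_noisy`, which is what forces the autoencoder to learn to remove the noise.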
In order to control the relative importance of the sparsity loss and the reconstruction loss, we can multiply the sparsity loss by a sparsity weight hyperparameter.
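One common way to write that weighted sum uses a KL-divergence sparsity penalty on the mean activation of each hidden unit; the function and parameter names below are illustrative, with only the "sparsity weight" hyperparameter taken from the text:

```python
import numpy as np

def kl_divergence(p, q):
    """KL divergence between target sparsity p and mean activation q."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def total_loss(recon_loss, activations, sparsity_target=0.1,
               sparsity_weight=0.2):
    # Mean activation of each hidden unit over the batch, clipped so the
    # logs stay finite.
    q = np.clip(activations.mean(axis=0), 1e-7, 1 - 1e-7)
    sparsity_loss = np.sum(kl_divergence(sparsity_target, q))
    # The sparsity weight controls the trade-off between the two terms.
    return recon_loss + sparsity_weight * sparsity_loss

acts = np.array([[0.05, 0.9],
                 [0.15, 0.8]])
loss = total_loss(0.5, acts)
```

Raising `sparsity_weight` pushes hidden units toward the target activation at the cost of reconstruction quality; lowering it does the reverse.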
One simple trick can speed up convergence: instead of using the MSE, we can choose a reconstruction loss that will have larger gradients.
The first is the usual reconstruction loss that pushes the autoencoder to reproduce its inputs (we can use cross entropy for this, as discussed earlier).
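When the inputs are scaled to [0, 1] and the outputs pass through a sigmoid, cross entropy is such a larger-gradient choice. A small sketch comparing it with the MSE (a hedged illustration, not the source's code):

```python
import numpy as np

def mse(targets, outputs):
    return np.mean((targets - outputs) ** 2)

def binary_cross_entropy(targets, outputs, eps=1e-7):
    """Assumes targets in [0, 1] and sigmoid outputs; eps avoids log(0)."""
    outputs = np.clip(outputs, eps, 1 - eps)
    return -np.mean(targets * np.log(outputs)
                    + (1 - targets) * np.log(1 - outputs))

t = np.array([1.0, 0.0, 1.0])   # "pixel" targets
y = np.array([0.6, 0.4, 0.7])   # sigmoid outputs
# Gradient w.r.t. a sigmoid output y: MSE gives 2*(y - t), while cross
# entropy gives (y - t) / (y * (1 - y)). Since y*(1 - y) <= 0.25, the
# cross-entropy gradient is always at least twice as large, which is why
# it tends to speed up convergence.
```

The same comparison is why cross entropy is paired with sigmoid outputs in practice: the gradient does not vanish when the output saturates near 0 or 1.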
To evaluate the performance of an autoencoder, one option is to measure the reconstruction loss (e.g., the MSE between the inputs and their reconstructions).
Vocabulary of the Iran Translators Network